
    What Your Data Didn’t Tell You the First Time Around: Advanced Analytic Approaches to Longitudinal Analyses

The present article describes the gap that exists between traditional data analysis techniques and more sophisticated methods that tend to be used more commonly among researchers outside of the study of violence against women. We briefly characterize growth models and person-centered analyses and describe the growing body of work in violence research that has applied these methods. Through an example from our own application of one of these techniques—latent class growth analysis—we highlight the ways that violence against women researchers may benefit from applying these more sophisticated methods to their own data, both past and present.
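The core idea of latent class growth analysis is to group subjects by the shape of their trajectories over time rather than by single-timepoint scores. A very rough two-step approximation (real LCGA estimates classes and growth curves jointly, e.g. in Mplus or with mixture models; the data and threshold here are invented for illustration) can convey the intuition:

```python
# A rough, illustrative approximation of latent class growth analysis (LCGA):
# fit an individual growth curve (intercept + slope) per subject, then group
# subjects by their growth parameters. Real LCGA estimates class membership
# and trajectories jointly; this two-step sketch only conveys the idea.
import numpy as np

rng = np.random.default_rng(0)
times = np.arange(5)  # five measurement waves

# Simulated data: a stable-low class and a decreasing-high class of scores
low  = 1.0 + 0.0 * times + rng.normal(0, 0.2, (30, 5))
high = 8.0 - 1.2 * times + rng.normal(0, 0.2, (30, 5))
data = np.vstack([low, high])

# Per-subject growth parameters via least squares: polyfit returns (slope, intercept)
coefs = np.array([np.polyfit(times, y, 1) for y in data])

# Simple split on slope as a stand-in for the mixture model's class assignment
labels = (coefs[:, 0] < -0.5).astype(int)
print(labels[:30].sum(), labels[30:].sum())  # → 0 30 (both classes recovered)
```

The point of the sketch is that class membership is inferred from the trajectory shape (here, the slope), which is exactly the information a cross-sectional analysis discards.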

Under the influence: Using natural language in interactive storytelling

Interacting in natural language with virtual actors is an important aspect of the development of future Interactive Storytelling systems. We describe a paradigm for speech interfaces in interactive storytelling based on the notion of influence. In this paradigm, the user is mainly a spectator who is nevertheless able to interfere with the course of action by issuing advice to the characters. This is achieved by recognising the corresponding speech acts and mapping them to the plans which implement characters' behaviours in the story. We discuss some examples based on a preliminary, yet fully implemented, prototype.
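The mapping from recognised speech acts to character plans could be sketched as a small dispatch table. All names below (the act types, `Character`, `apply_speech_act`) are hypothetical illustrations of the influence paradigm, not the prototype's actual API:

```python
# Hypothetical sketch of the influence paradigm: a recognised speech act from
# the spectator is mapped to an operation on a character's goal stack.
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    goals: list = field(default_factory=list)

ADVICE_MAP = {
    # speech act type -> how it alters the character's plan
    "advise": lambda ch, goal: ch.goals.insert(0, goal),  # re-prioritise a goal
    "warn": lambda ch, goal: ch.goals.remove(goal) if goal in ch.goals else None,
}

def apply_speech_act(character, act_type, goal):
    """Apply a recognised speech act to the character's current goals."""
    handler = ADVICE_MAP.get(act_type)
    if handler:
        handler(character, goal)
    return character.goals

hero = Character("hero", goals=["find_ally", "confront_villain"])
apply_speech_act(hero, "advise", "hide")  # spectator: "You should hide!"
print(hero.goals)  # → ['hide', 'find_ally', 'confront_villain']
```

The design choice illustrated is that the user never controls a character directly; advice only reorders or prunes the goals that drive the character's own planning.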

    Explanations of Black-Box Model Predictions by Contextual Importance and Utility

The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustable intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains prediction results without transforming the model into an interpretable one. We present an example of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car selection example and Iris flower classification by presenting complete (i.e. the causes of an individual prediction) and contrastive explanations (i.e. contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
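The abstract describes CI and CU as numerical, model-agnostic quantities. A minimal sketch, assuming the common formulation in which CI measures how much a feature can move the output in the current context and CU measures how favourable the current value is within that range (the toy model and the fixed output range are our assumptions, not the paper's):

```python
import numpy as np

# Hypothetical black-box model over two features; any callable would do.
def model(x):
    return 0.6 * x[0] ** 2 + 0.4 * np.sin(x[1])

def ci_cu(model, x, feature, lo, hi, absmin, absmax, n=1001):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU) for one
    feature by sweeping its value range while the remaining features keep
    their current (contextual) values. absmin/absmax is the model's assumed
    overall output range; names here are illustrative, not the paper's API."""
    xs = np.tile(np.asarray(x, dtype=float), (n, 1))
    xs[:, feature] = np.linspace(lo, hi, n)
    outputs = np.apply_along_axis(model, 1, xs)
    cmin, cmax = outputs.min(), outputs.max()
    y = model(np.asarray(x, dtype=float))
    ci = (cmax - cmin) / (absmax - absmin)  # how much this feature matters here
    cu = (y - cmin) / (cmax - cmin)         # how favourable the current value is
    return ci, cu

ci, cu = ci_cu(model, [0.5, 0.5], feature=0, lo=0.0, hi=1.0, absmin=0.0, absmax=1.0)
print(round(ci, 3), round(cu, 3))  # → 0.6 0.25
```

Because the method only queries the model as a black box, it applies equally to the linear and non-linear cases the abstract mentions; the resulting numbers can then be rendered as visuals or natural-language statements.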

    Proof Explanation in the DR-DEVICE System

Trust is a vital feature for the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Thus, systems should be able to explain their actions, sources, and beliefs; this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system, which is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved by a RuleML language extension.

    Ada and Grace: Direct Interaction with Museum Visitors


Software specification through semi-formal, illustrative models

This contribution examines which approaches to specification are fundamentally suitable, and argues why the principle of semi-formal specification based on illustrative models is advantageous in industrial practice. The examples, and the experiences reported at the end, stem from our work with the SPADES specification system, which is based on the principle of semi-formal description.

    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

© 2013 Martin et al.; licensee BioMed Central Ltd. This article has been made available through the Brunel Open Access Publishing Fund. Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can both account for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data, and we recommend adopting a pragmatic, but scientifically better founded, approach to mixture risk assessment.
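The arithmetic behind the default factor of 100 and its sub-components can be made concrete with a small worked example. The NOAEL value below is hypothetical; the factor splits follow the conventional WHO/IPCS scheme the abstract alludes to:

```python
# Illustrative arithmetic only: the conventional default factors discussed in
# the abstract, applied to a hypothetical NOAEL (not from any real assessment).
noael = 50.0         # hypothetical NOAEL, mg/kg bw/day, from an animal study

interspecies = 10.0  # animal-to-human extrapolation
intraspecies = 10.0  # variability within the human population
default_uf = interspecies * intraspecies  # the default factor of 100

# The WHO/IPCS scheme splits each factor of 10 into toxicokinetic (TK) and
# toxicodynamic (TD) sub-components, e.g. 4.0 (TK) x 2.5 (TD) interspecies.
inter_tk, inter_td = 4.0, 2.5
assert abs(inter_tk * inter_td - interspecies) < 1e-12

tolerable_dose = noael / default_uf
print(default_uf, tolerable_dose)  # → 100.0 0.5
```

The abstract's argument is precisely that this multiplication of sub-factors yields an "adequate" rather than a worst-case margin, and that whether 0.5 mg/kg bw/day in an example like this actually achieves a stated protection level cannot be read off from the factors themselves.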

Integration of DFDs into a UML-based model-driven engineering approach

The main aim of this article is to discuss how the functional and the object-oriented views can be interplayed to represent the various modeling perspectives of embedded systems. We discuss whether the object-oriented modeling paradigm, the predominant one for developing software at the present time, is also adequate for modeling embedded software and how it can be used with the functional paradigm. More specifically, we present how the main modeling tool of the traditional structured methods, data flow diagrams, can be integrated into an object-oriented development strategy based on the Unified Modeling Language. The rationale behind the approach is that both views are important for modeling purposes in embedded systems environments, and thus a combined and integrated model is not only useful, but also fundamental for developing complex systems. The approach was integrated into a model-driven engineering process, where tool support for the models used was provided. In addition, model transformations have been specified and implemented to automate the process. We exemplify the approach with an IPv6 router case study.

    Conversational Interfaces for Explainable AI: A Human-Centered Approach

One major goal of Explainable Artificial Intelligence (XAI), in order to enhance trust in technology, is to enable the user to request information and explanations about an intelligent agent's functionality directly from the agent. We propose conversational interfaces (CI) as the ideal setting, since they are intuitive for humans and computationally processable. While there are many approaches addressing technical issues of this human-agent communication problem, the user perspective appears to be widely neglected. With the purpose of better understanding requirements and identifying implicit expectations from a human-centered view, a Wizard of Oz experiment was conducted in which participants tried to elicit basic information from a simulated artificial agent ("What are your capabilities?"). The hypothesis that users pursue fundamentally different strategies could be verified with the help of Conversation Analysis. The results illustrate the vast variety in human communication and disclose both the requirements of users and the obstacles in implementing protocols for interacting agents. Finally, we infer essential indications for the implementation of such a CI.

    Motion Rail: A Virtual Reality Level Crossing Training Application

This paper presents the development and usability testing of a Virtual Reality (VR) based system named 'Motion Rail' for training children in railway crossing safety. The children use a VR head-mounted device and a controller to navigate the VR environment and perform a level crossing task, receiving instant pass-or-fail feedback on a display in the VR environment. Five participants, two males and three females, took part in the usability test. The outcomes of the test were promising: the children were very engaged and would like to adopt this training approach in future safety training.